Results 1 - 13 of 13
1.
Urology ; 174: 58-63, 2023 04.
Article in English | MEDLINE | ID: mdl-36736916

ABSTRACT

OBJECTIVE: To improve upon prior attempts to predict which patients will pass their obstructing ureteral stones, we developed a machine learning algorithm that predicts stone passage using only the CT scan from a patient's initial presentation. METHODS: We obtained Institutional Review Board approval to conduct a retrospective study, extracting data from all patients with an obstructing 3-10 mm ureteral stone. We included patients with sufficient data to be categorized as having either passed or failed to pass the obstructing stone. We developed a 3D convolutional neural network (3D-CNN) model using a dynamic learning rate, the Adam optimizer, and early stopping, with 10-fold cross-validation. Using this model, we calculated the area under the curve (AUC) and built a confusion matrix, which we compared against a model based only on the largest dimension of the stone. RESULTS: A total of 138 patients met the inclusion criteria and had adequate images for preprocessing and inclusion in the study. Seventy patients failed to pass their ureteral stones, and 68 patients passed them. For the 3D-CNN model, the mean AUC was 0.95, with an overall mean sensitivity of 95% and mean specificity of 77%, outperforming the stone-size-only model. CONCLUSION: The 3D-CNN model predicts which patients will pass their obstructing ureteral stones from the CT scan alone, without requiring any further measurements. This can provide useful clinical information and may help avoid delays in care for patients who will inevitably require surgical intervention.
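The reported metrics follow directly from a confusion matrix and ranked prediction scores. A minimal sketch, using invented counts and scores rather than the study's data, of how sensitivity, specificity, and a rank-based AUC are computed:

```python
# Toy evaluation of a stone-passage classifier.
# All counts and scores below are illustrative, not the study's data.

def sensitivity(tp, fn):
    """True positive rate: passed stones correctly predicted as passing."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: failed passages correctly predicted as failing."""
    return tn / (tn + fp)

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical model outputs for 8 patients (label 1 = stone passed)
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.2, 0.1]
labels = [1,   1,   1,    0,   1,   0,    0,   0]
print(round(auc(scores, labels), 3))
print(sensitivity(tp=3, fn=1), specificity(tn=3, fp=1))
```

The rank-based AUC here equals the probability that a randomly chosen passed-stone case is scored above a randomly chosen failed case, which is the usual interpretation of the AUC reported for such models.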


Subjects
Ureteral Calculi, Humans, Ureteral Calculi/complications, Ureteral Calculi/diagnostic imaging, Ureteral Calculi/surgery, Artificial Intelligence, Retrospective Studies, Tomography, X-Ray Computed/methods, Computers
2.
J Med Internet Res ; 23(9): e25837, 2021 09 29.
Article in English | MEDLINE | ID: mdl-34586074

ABSTRACT

BACKGROUND: Digital health agents (embodied conversational agents designed specifically for health interventions) provide a promising alternative or supplement to behavioral health services by reducing barriers to accessing care. OBJECTIVE: Our goals were to (1) develop an expressive, speech-enabled digital health agent, operating in a 3-dimensional virtual environment, that delivers a brief behavioral health intervention over the internet to reduce alcohol use, and (2) understand its acceptability, feasibility, and utility with its end users. METHODS: We developed such an agent, with facial expressions and body gestures, operating in a 3-dimensional virtual office, and then asked 51 alcohol users to report on its acceptability, feasibility, and utility. RESULTS: The agent uses speech recognition and a model of empathetic verbal and nonverbal behaviors to engage the user, and its performance enabled it to successfully deliver the brief behavioral health intervention over the internet. Descriptive statistics indicated that participants had overwhelmingly positive experiences with the agent, including engagement with the technology, acceptance, perceived utility, and intent to use the technology. Illustrative qualitative quotes provided further insight into the potential reach and impact of digital health agents in behavioral health care. CONCLUSIONS: Web-delivered interventions by expressive, speech-enabled digital health agents may provide an exciting complement or alternative to traditional one-on-one treatment. They may be especially helpful for hard-to-reach communities with behavioral health workforce shortages.


Subjects
Alcoholism, Motivational Interviewing, Alcohol Drinking, Feasibility Studies, Humans, Speech
3.
Methods Mol Biol ; 1939: 49-69, 2019.
Article in English | MEDLINE | ID: mdl-30848456

ABSTRACT

Technological advancements in many fields have led to huge increases in data production, in volume, diversity, and the speed at which new data become available, yet with little conformity in how those data are interpreted. This era of "big data" provides unprecedented opportunities for data-driven research and "big picture" models. However, in-depth analyses that draw on varied data types and sources to extract knowledge have become a more daunting task. This is especially the case in the life sciences, where simplifying and flattening diverse data types often leads to incorrect predictions. Effective application of big data approaches in the life sciences requires better, knowledge-based semantic models that can serve as a framework for big data integration while avoiding oversimplifications, such as reducing various biological data types to the gene level. A huge hurdle in developing such semantic knowledge models, or ontologies, is the knowledge acquisition bottleneck: automated methods are still very limited, and significant human expertise is required. In this chapter, we describe a methodology to systematize this knowledge acquisition and representation challenge, termed the KNowledge Acquisition and Representation Methodology (KNARM). We then describe its application in implementing the Drug Target Ontology (DTO). We aimed to create an approach, involving domain experts and knowledge engineers, for building useful, comprehensive, consistent ontologies that enable big data approaches in the domain of drug discovery without the currently common simplifications.


Subjects
Big Data, Biological Ontologies, Drug Discovery/methods, Semantic Web, Animals, Databases, Factual, Humans, Molecular Targeted Therapy
4.
J Biomed Semantics ; 8(1): 50, 2017 Nov 09.
Article in English | MEDLINE | ID: mdl-29122012

ABSTRACT

BACKGROUND: One of the most successful approaches to developing new small molecule therapeutics has been to start from a validated druggable protein target. However, only a small subset of potentially druggable targets has attracted significant research and development resources. The Illuminating the Druggable Genome (IDG) project develops resources to catalyze the development of likely targetable, yet currently understudied, prospective drug targets. A central component of the IDG program is a comprehensive knowledge resource of the druggable genome. RESULTS: As part of that effort, we have developed a framework to integrate, navigate, and analyze drug discovery data based on formalized and standardized classifications and annotations of druggable protein targets: the Drug Target Ontology (DTO). DTO was constructed by extensive curation and consolidation of various resources. It classifies the four major drug target protein families (GPCRs, kinases, ion channels, and nuclear receptors) based on phylogeny, function, target development level, disease association, tissue expression, chemical ligand and substrate characteristics, and target-family-specific characteristics. The formal ontology was built using a new software tool that auto-generates most axioms from a database while supporting manual knowledge acquisition. A modular, hierarchical implementation facilitates ontology development and maintenance and makes use of various external ontologies, thus integrating DTO into the ecosystem of biomedical ontologies. As a formal OWL-DL ontology, DTO contains asserted and inferred axioms. Modeling data from the Library of Integrated Network-based Cellular Signatures (LINCS) program illustrates the potential of DTO for contextual data integration and nuanced definition of important drug target characteristics. DTO has been implemented in the IDG user interface portal Pharos and in the TIN-X explorer of protein target-disease relationships.
CONCLUSIONS: DTO was built to meet the need for a formal semantic model of druggable targets, covering related information such as protein, gene, protein domain, protein structure, binding site, small molecule drug, mechanism of action, protein tissue localization, disease association, and many other types of information. DTO will further facilitate the otherwise challenging integration of, and formal linking to, biological assays, phenotypes, disease models, drug poly-pharmacology, binding kinetics, and many other processes, functions, and qualities that are at the core of drug discovery. The first version of DTO is publicly available via the website http://drugtargetontology.org/ , GitHub ( http://github.com/DrugTargetOntology/DTO ), and the NCBO BioPortal ( http://bioportal.bioontology.org/ontologies/DTO ). The long-term goal of DTO is to provide such an integrative framework and to populate the ontology with this information as a community resource.
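The described auto-generation of axioms from a database can be pictured as turning tabular target records into class axioms. A hypothetical sketch in simplified Manchester-like syntax; the rows, property names, and output format here are invented for illustration, not taken from DTO:

```python
# Sketch: generate simple class axioms from tabular target records.
# The records and the "targetDevelopmentLevel" property are invented
# examples, not actual DTO content or syntax.

TARGETS = [
    {"name": "ABL1", "family": "kinase", "tdl": "Tclin"},
    {"name": "GPR88", "family": "GPCR", "tdl": "Tdark"},
]

def axioms(row):
    """Emit subclass and annotation axioms for one target record."""
    return [
        f"Class: {row['name']}",
        f"  SubClassOf: {row['family']}",
        f"  Annotations: targetDevelopmentLevel \"{row['tdl']}\"",
    ]

for row in TARGETS:
    print("\n".join(axioms(row)))
```

In a real pipeline the database schema would drive which axiom patterns are emitted, with manually curated axioms layered on top, as the abstract describes.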


Subjects
Biological Ontologies, Computational Biology/methods, Drug Delivery Systems/methods, Drug Discovery/methods, Humans, Proteins/classification, Proteins/genetics, Proteins/metabolism, Semantics, Software
5.
Article in English | MEDLINE | ID: mdl-27055827

ABSTRACT

Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms behind the loss of function and mobility after SCI and to develop therapies that promote recovery after injury. SCI experimental methods, data, and domain knowledge are locked in the largely unstructured text of scientific publications, making large-scale integration with existing bioinformatics resources, and subsequent analysis, infeasible. The lack of standard reporting for experimental variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying, and download. Database URL: http://regenbase.org.
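The kind of cross-statement linking described, from compound effects to downstream outcomes, can be illustrated with a toy triple store and a two-step pattern query. The entities and predicates below are invented stand-ins, not actual RegenBase content:

```python
# Minimal pattern matching over RDF-style triples, in the spirit of
# RegenBase queries. Entities and predicates are invented examples.

TRIPLES = {
    ("compoundA", "inhibits", "RhoA"),
    ("RhoA", "suppresses", "axon_growth"),
    ("compoundB", "inhibits", "PTEN"),
    ("PTEN", "suppresses", "axon_growth"),
}

def match(pattern, triples):
    """Yield variable bindings for one (s, p, o) pattern.
    Strings starting with '?' are treated as variables."""
    for t in triples:
        binding = {}
        for pat, val in zip(pattern, t):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                break
        else:
            yield binding

# Which compounds inhibit a suppressor of axon growth?
hits = set()
for b1 in match(("?c", "inhibits", "?t"), TRIPLES):
    for _ in match((b1["?t"], "suppresses", "axon_growth"), TRIPLES):
        hits.add(b1["?c"])
print(sorted(hits))
```

A production system would express the same join as a SPARQL query against the RDF store; the nested loops here mirror how a basic graph pattern with a shared variable is evaluated.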


Subjects
Databases, Factual, Knowledge Bases, Spinal Cord Injuries, Animals, Disease Models, Animal, Humans, Mice, Rats, Translational Research, Biomedical
6.
Int J Intell Sci ; 5(1): 44-62, 2015 Jan.
Article in English | MEDLINE | ID: mdl-26594595

ABSTRACT

Efficiently querying Description Logic (DL) ontologies is becoming a vital task in various data-intensive DL applications. As a basic service for answering object queries over DL ontologies, instance checking can be realized via the most specific concept (MSC) method, which converts instance checking into subsumption problems. This method, however, loses its simplicity and efficiency when applied to large and complex ontologies, as it tends to generate very large MSCs that can lead to intractable reasoning. In this paper, we propose a revision of this MSC method for the DL [Formula: see text], allowing it to generate much simpler and smaller concepts that are still specific enough to answer a given query. Because the computed MSCs are independent of one another, scalable query answering can also be achieved by distributing and parallelizing the computations. An empirical evaluation shows the efficacy of our revised MSC method and the significant efficiency gains achieved when using it to answer object queries.
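The core idea, rolling an individual's ABox neighborhood into a concept that can then be tested for subsumption, can be sketched for a toy EL-style fragment. The paper's method targets a richer DL and a far more careful construction; all names here are invented:

```python
# Toy depth-bounded "most specific concept" rollup: fold an individual's
# ABox neighborhood into a nested concept expression that could then be
# checked for subsumption against a query concept. This sketch covers
# only atomic concepts and role successors; names are invented examples.

CONCEPTS = {"a": {"Person"}, "b": {"Course"}}
ROLES = [("a", "takes", "b")]

def msc(ind, depth=1):
    """Approximate the MSC of `ind` as a nested {atoms, successors} dict."""
    expr = {"atoms": sorted(CONCEPTS.get(ind, set())), "succ": []}
    if depth > 0:
        for s, r, o in ROLES:
            if s == ind:
                expr["succ"].append((r, msc(o, depth - 1)))
    return expr

# msc("a") corresponds to the concept  Person AND (takes SOME Course)
print(msc("a"))
```

Bounding the rollup depth is what keeps the generated concept small; the revised method's contribution is, roughly, determining how much of the neighborhood is actually needed for a given query.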

7.
Artif Intell Appl ; 2(1): 8-31, 2015 Feb.
Article in English | MEDLINE | ID: mdl-26848490

ABSTRACT

Extracting logically independent fragments from an ontology ABox can help address the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module that guarantees complete preservation of facts about a given set of individuals, so the module can be reasoned over independently of the rest of the ABox with respect to the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such modules, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and that reasoning time based on ABox modules can be reduced significantly.
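A simplified picture of module extraction, assuming reachability over role assertions as the connectedness criterion. The paper's SHIQ-aware definition is more involved; the ABox below is invented:

```python
# Sketch of ABox module extraction by role-assertion reachability:
# starting from seed individuals, pull in every assertion about any
# individual connected (in either direction) to the seeds.
# A simplification of the paper's modules; all names are invented.

from collections import deque

CONCEPT_ASSERTIONS = {"a": ["Student"], "b": ["Course"],
                      "c": ["Professor"], "d": ["Dept"]}
ROLE_ASSERTIONS = [("a", "takes", "b"), ("c", "teaches", "b"),
                   ("d", "offers", "e")]

def abox_module(seeds):
    """Collect all assertions reachable from the seed individuals."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        x = queue.popleft()
        for s, _, o in ROLE_ASSERTIONS:
            for y, z in ((s, o), (o, s)):
                if y == x and z not in seen:
                    seen.add(z)
                    queue.append(z)
    concepts = {i: CONCEPT_ASSERTIONS.get(i, []) for i in seen}
    roles = [r for r in ROLE_ASSERTIONS if r[0] in seen and r[2] in seen]
    return concepts, roles

concepts, roles = abox_module({"a"})
print(sorted(concepts))  # individuals in the module
print(roles)             # role assertions kept in the module
```

Because each module is self-contained for its seed individuals, modules for different seeds can be reasoned over in parallel, which is the scalability argument the abstract makes.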

9.
PeerJ ; 2: e524, 2014.
Article in English | MEDLINE | ID: mdl-25165633

ABSTRACT

Bioinformatics and computer-aided drug design rely on the curation of a large number of protocols for biological assays that measure the ability of potential drugs to achieve a therapeutic effect. These assay protocols are generally published by scientists as plain text, which needs to be annotated more precisely in order to be useful to software methods. We have developed a pragmatic approach to describing assays according to the semantic definitions of the BioAssay Ontology (BAO) project, using a hybrid of machine learning based on natural language processing and a simplified user interface designed to help scientists curate their data with minimum effort. We have carried out this work on the premise that pure machine learning is insufficiently accurate and that expecting scientists to find the time to annotate their protocols manually is unrealistic. By combining these approaches, we have created an effective prototype with which annotation of bioassay text within the domain of the training set can be accomplished very quickly. Well-trained annotations require single-click user approval, while annotations from outside the training set's domain can be identified using the search feature of a well-designed user interface and subsequently used to improve the underlying models. By drastically reducing the time required for scientists to annotate their assays, we can realistically advocate for semantic annotation to become a standard part of the publication process. Once even a small proportion of the public body of bioassay data is marked up, bioinformatics researchers can begin to construct sophisticated and useful search and analysis algorithms that will provide a diverse and powerful set of tools for drug discovery researchers.
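The hybrid workflow can be caricatured as scoring candidate ontology terms against protocol text so that a curator only approves top suggestions. This bag-of-words sketch is far simpler than the NLP models described, and the terms and keywords are invented, not actual BAO annotations:

```python
# Toy annotation suggester: rank candidate ontology terms by keyword
# overlap with assay protocol text, so a curator approves rather than
# types. Terms and keyword sets are invented examples, not BAO content.

CANDIDATE_TERMS = {
    "luciferase reporter assay": {"luciferase", "reporter", "luminescence"},
    "cell viability assay": {"viability", "atp", "cytotoxicity"},
    "binding assay": {"binding", "affinity", "displacement"},
}

def suggest(protocol_text, top_n=2):
    """Rank candidate terms by keyword overlap with the protocol text."""
    words = set(protocol_text.lower().split())
    scored = [(len(words & kws), term) for term, kws in CANDIDATE_TERMS.items()]
    scored.sort(key=lambda sc: (-sc[0], sc[1]))
    return [term for score, term in scored[:top_n] if score > 0]

text = "Cells were lysed and luciferase luminescence was read as reporter signal"
print(suggest(text))
```

In the described system a trained model replaces the keyword overlap, but the interaction pattern is the same: high-confidence suggestions need only a single click, and texts with no confident suggestion are routed to search.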

10.
J Biomed Semantics ; 5(Suppl 1 Proceedings of the Bio-Ontologies Spec Interest G): S5, 2014.
Article in English | MEDLINE | ID: mdl-25093074

ABSTRACT

The lack of established standards for describing and annotating biological assays and screening outcomes in the domain of drug and chemical probe discovery severely limits the ability to use public and proprietary drug screening data to their full potential. We created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop the common reference metadata terms and definitions required for describing relevant information about low- and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analysis of drug screening data. Since first releasing BAO on the BioPortal in 2010, we have considerably expanded and enhanced it and have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO, in its formal structures, engineering approaches, and content, toward a design that enables modeling of complex assays, including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS), and integration with other ontologies and datasets. One critical question in evolving BAO is how to efficiently reuse and share specific parts of our ontologies among various research projects without violating the integrity of the ontology and without creating redundancies. This paper answers that question with a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation or extraction of derived ontologies (or perspectives) that can suit particular use cases or software applications.

11.
Proc Int World Wide Web Conf ; 2014: 405-406, 2014.
Article in English | MEDLINE | ID: mdl-25844402

ABSTRACT

Instance checking is a central tool for data retrieval from description logic (DL) ontologies. In this paper, we propose a revised most specific concept (MSC) method for the DL SHI, which converts instance checking into subsumption problems. The revised method generates small concepts that are specific enough to answer a given query, allowing reasoning to explore only a subset of the ABox data for efficiency. Experiments show the effectiveness of the proposed method in reducing concept size and improving reasoning efficiency.

12.
PLoS One ; 7(11): e49198, 2012.
Article in English | MEDLINE | ID: mdl-23155465

ABSTRACT

Huge amounts of high-throughput screening (HTS) data for probe and drug development projects are being generated in the pharmaceutical industry and, more recently, in the public sector. The resulting experimental datasets are increasingly disseminated via publicly accessible repositories. However, existing repositories lack sufficient metadata to describe the experiments and are often difficult for non-experts to navigate. The lack of standardized descriptions and semantics for biological assays and screening results hinders targeted data retrieval, integration, aggregation, and analysis across different HTS datasets, for example to infer mechanisms of action of small molecule perturbagens. To address these limitations, we created the BioAssay Ontology (BAO). BAO was developed with a focus on data integration and analysis, enabling the classification of assays and screening results by concepts related to format, assay design, technology, target, and endpoint. Previously, we reported on the higher-level design of BAO and on the semantic querying capabilities offered by the ontology-indexed triple store of HTS data. Here, we report our detailed design, annotation pipeline, substantially enlarged annotation knowledge base, and analysis results. We used BAO to annotate assays from the largest public HTS data repository, PubChem, and demonstrate its utility for categorizing and analyzing diverse HTS results from numerous experiments. BAO is publicly available from the NCBO BioPortal at http://bioportal.bioontology.org/ontologies/1533. BAO provides controlled terminology and uniform scope for reporting probe and drug discovery screening assays and results. It leverages description logic to formalize the domain knowledge and facilitate semantic integration with diverse other resources. As a consequence, BAO offers the potential to infer new knowledge from a corpus of assay results, for example molecular mechanisms of action of perturbagens.


Subjects
Computational Biology/methods, Drug Evaluation, Preclinical/methods, High-Throughput Screening Assays/methods, Databases, Factual
13.
BMC Bioinformatics ; 12: 257, 2011 Jun 24.
Article in English | MEDLINE | ID: mdl-21702939

ABSTRACT

BACKGROUND: High-throughput screening (HTS) is one of the main strategies for identifying novel entry points for the development of small molecule chemical probes and drugs, and it is now commonly accessible to public sector research. Large amounts of data generated in HTS campaigns are submitted to public repositories such as PubChem, which is growing at an exponential rate. The diversity and quantity of available HTS assays and screening results pose enormous challenges to organizing, standardizing, integrating, and analyzing the datasets, and thus to maximizing the scientific and, ultimately, public health impact of the huge investments made to build public sector HTS capabilities. Novel approaches to organizing, standardizing, and accessing HTS data are required to address these challenges. RESULTS: We developed the first ontology to describe HTS experiments and screening results using expressive description logic. The BioAssay Ontology (BAO) serves as a foundation for the standardization of HTS assays and data and as a semantic knowledge model. In this paper we show important examples of formalizing HTS domain knowledge and point out the advantages of this approach. The ontology is available online at the NCBO BioPortal: http://bioportal.bioontology.org/ontologies/44531. CONCLUSIONS: After a large manual curation effort, we loaded BAO-mapped data triples into an RDF database store and used a reasoner in several case studies to demonstrate the benefits of formalized domain knowledge representation in BAO. The examples illustrate semantic querying capabilities in which BAO enables the retrieval of inferred search results that are relevant to a given query but not explicitly defined. BAO thus opens new functionality for annotating, querying, and analyzing HTS datasets, and the potential for discovering new knowledge by means of inference.
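The inferred-retrieval behavior described can be sketched as a transitive walk over subClassOf triples, so that a query for a general assay class also returns assays annotated only with its subclasses. The class names and assay IDs below are simplified stand-ins, not actual BAO terms or PubChem identifiers:

```python
# Minimal illustration of inferred retrieval over a class hierarchy:
# a query for a general class also matches assays annotated with any
# subclass, via transitive closure of subClassOf. Names are invented.

SUBCLASS_OF = {
    "luciferase reporter assay": "reporter gene assay",
    "reporter gene assay": "assay",
    "binding assay": "assay",
}
ANNOTATIONS = {"AID-1": "luciferase reporter assay",
               "AID-2": "binding assay"}

def superclasses(cls):
    """All ancestors of a class, including the class itself."""
    out = {cls}
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        out.add(cls)
    return out

def query(target_class):
    """Assays annotated with target_class or any subclass of it."""
    return sorted(a for a, c in ANNOTATIONS.items()
                  if target_class in superclasses(c))

print(query("reporter gene assay"))
print(query("assay"))
```

A DL reasoner over the full ontology performs this kind of closure (and much more, over arbitrary axioms) automatically; the walk above captures only the subclass case that makes inferred search results appear for a query that never mentions them explicitly.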


Subjects
Databases, Factual, High-Throughput Screening Assays, Biological Assay/methods, Semantics, Vocabulary, Controlled